How Should the Law Treat Future AI Systems? Fictional Legal Personhood versus Legal Identity

Alexander, Heather J., Simon, Jonathan A., Pinard, Frédéric

arXiv.org Artificial Intelligence

The law draws a sharp distinction between objects and persons, and between two kinds of persons: the "fictional" kind (i.e. corporations) and the "non-fictional" kind (individual or "natural" persons). This paper will assess whether we maximize overall long-term legal coherence by (A) maintaining an object classification for all future AI systems, (B) creating fictional legal persons associated with suitably advanced, individuated AI systems (giving these fictional legal persons derogable rights and duties associated with certified groups of existing persons, potentially including free speech, contract rights, and standing to sue "on behalf of" the AI system), or (C) recognizing non-fictional legal personhood through legal identity for suitably advanced, individuated AI systems (recognizing them as entities meriting legal standing with non-derogable rights, which for the human case include life, due process, habeas corpus, freedom from slavery, and freedom of conscience). We will clarify the meaning and implications of each option along the way, considering liability, copyright, family law, fundamental rights, civil rights, citizenship, and AI safety regulation. We will tentatively find that the non-fictional personhood approach may be best from a coherence perspective, for at least some advanced AI systems. An object approach may prove untenable for sufficiently humanoid advanced systems, though we suggest that it is adequate for currently existing systems as of 2025. While fictional personhood would resolve some coherence issues for future systems, it would create others and provide solutions that are neither durable nor fit for purpose. Finally, our review will suggest that "hybrid" approaches are likely to fail and lead to further incoherence: the choice between object, fictional person, and non-fictional person is unavoidable.


The Artificial Intelligence and Data Act (AIDA) – Companion document

#artificialintelligence

Artificial intelligence (AI) systems are poised to have a significant impact on the lives of Canadians and the operations of Canadian businesses. The AIDA represents an important milestone in implementing the Digital Charter and ensuring that Canadians can trust the digital technologies that they use every day. The design, development, and use of AI systems must be safe, and must respect the values of Canadians. The framework proposed in the AIDA is the first step towards a new regulatory system designed to guide AI innovation in a positive direction, and to encourage the responsible adoption of AI technologies by Canadians and Canadian businesses. The Government intends to build on this framework through an open and transparent regulatory development process. Consultations would be organized to gather input from a variety of stakeholders across Canada to ensure that the regulations achieve outcomes aligned with Canadian values. The global interconnectedness of the digital economy requires that the regulation of AI systems in the marketplace be coordinated internationally. Canada has drawn from and will work together with international partners – such as the European Union (EU), the United Kingdom, and the United States (US) – to align approaches, in order to ensure that Canadians are protected globally and that Canadian firms can be recognized internationally as meeting robust standards.


2023 Will Be The Year Of AI Ethics Legislation Acceleration

#artificialintelligence

Ethical AI will require the careful cultivation of many ecosystems. Ethical AI has been a concern of AI leaders and practitioners for many years, but global jurisdictions finally seem to be moving from policy formulation and stakeholder engagement to drafting legal bills and acts with real teeth. Expect many new laws to pass in 2023, tightening citizen privacy protections and creating risk frameworks and audit requirements for data bias, privacy, and security risks. At the same time, regulators will have to build an entire global ecosystem to ensure AI audits are conducted effectively. Many questions loom: who will validate certifications for AI audit practices, and will we overburden AI innovation, as we have in so many other regulated operating practices, to the point where the risk and cost of non-conformance inhibit innovation and capital funding? Finding a balance will be key.


Opinion

#artificialintelligence

The question has arisen with escalating frequency in recent years, a sort of journalistic thought bubble emerging from the collective consciousness of writers. Will artificial intelligence (AI) save humanity, or supplant us? On the one hand, we are told that AI holds the potential to solve some of the world's biggest problems -- challenges like poverty, food insecurity, inequality and climate change. On the other hand, some very smart people have issued warnings. Stephen Hawking said the technology could "spell the end of the human race."
